
    Teaching Virtual Characters to use Body Language

    Non-verbal communication, or "body language", is a critical component in constructing believable virtual characters. Most often, body language is implemented by a set of ad-hoc rules. We propose a new method for authors to specify and refine their character's body-language responses. Using our method, the author watches the character acting in a situation and provides simple feedback on-line. The character then learns to use its body language to maximize the rewards, based on a reinforcement learning algorithm.
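    The abstract names reinforcement learning from online author feedback but not the specific algorithm. As a rough, assumed sketch of how such feedback could drive learning, the snippet below uses tabular Q-learning over (situation, gesture) pairs; the class name, state/action encoding, and reward values are all hypothetical, not the paper's.

```python
# Hypothetical sketch only: tabular Q-learning over (situation, gesture) pairs,
# with the author's online feedback used directly as the reward signal.
import random
from collections import defaultdict

class BodyLanguageLearner:
    def __init__(self, gestures, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.gestures = gestures            # candidate body-language responses
        self.q = defaultdict(float)         # value estimate for (situation, gesture)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, situation):
        """Pick a gesture: explore occasionally, otherwise act greedily."""
        if random.random() < self.epsilon:
            return random.choice(self.gestures)
        return max(self.gestures, key=lambda g: self.q[(situation, g)])

    def feedback(self, situation, gesture, reward, next_situation):
        """Author feedback (e.g. +1 approve, -1 reject) updates the value estimate."""
        best_next = max(self.q[(next_situation, g)] for g in self.gestures)
        target = reward + self.gamma * best_next
        self.q[(situation, gesture)] += self.alpha * (target - self.q[(situation, gesture)])

# The author watches the character, then rewards or penalises the chosen gesture.
learner = BodyLanguageLearner(["nod", "shrug", "lean_forward"])
g = learner.choose("greeting")
learner.feedback("greeting", g, reward=+1, next_situation="conversation")
```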

    Autonomous Secondary Gaze Behaviours

    In this paper we describe secondary behaviour: behaviour that is generated autonomously for an avatar. The user will control various aspects of the avatar's behaviour, but a truly expressive avatar must produce more complex behaviour than a user could specify in real time. Secondary behaviour provides some of this expressive behaviour autonomously. However, though it is produced autonomously, the secondary behaviour must be appropriate to the actions that the user is controlling (the primary behaviour) and must correspond to what the user wants. We describe an architecture which achieves these two aims by tagging the primary behaviour with messages to be sent to the secondary behaviour, and by allowing the user to design various aspects of the secondary behaviour before starting to use the avatar. We have implemented this general architecture in a system which adds gaze behaviour to user-designed actions.
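    The abstract does not detail the tagging mechanism, so the sketch below is only an assumed illustration of the architecture it describes: primary actions carry tags (messages) that a user-configured secondary gaze controller consumes at runtime. The class names, tag format, and parameters are invented for illustration.

```python
# Illustrative sketch only: primary actions carry tags that a secondary gaze
# controller reacts to; the controller's parameters are designed by the user
# before the avatar is used. Names and message formats are assumptions.
from dataclasses import dataclass, field

@dataclass
class PrimaryAction:
    name: str
    tags: list = field(default_factory=list)   # messages for the secondary behaviour

@dataclass
class GazeController:
    # User-designed parameters, set before the avatar is used.
    glance_probability: float = 0.5
    hold_seconds: float = 1.0

    def on_primary(self, action: PrimaryAction):
        """Generate gaze behaviour appropriate to the current primary action."""
        for tag in action.tags:
            if tag.startswith("look_at:"):
                target = tag.split(":", 1)[1]
                print(f"gaze -> {target} for {self.hold_seconds}s")

# A "wave" action tagged so the avatar glances at the person being greeted.
wave = PrimaryAction("wave", tags=["look_at:interlocutor"])
GazeController(hold_seconds=0.8).on_primary(wave)
```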

    Integrating internal behavioural models with external expression

    Users will believe in a virtual character more if they can empathise with it and understand what 'makes it tick'. This will be helped by making the motivations of the character, and other processes that go towards creating its behaviour, clear to the user. This paper proposes that this can be achieved by linking the behavioural or cognitive system of the character to expressive behaviour. This idea is discussed in general and then demonstrated with an implementation that links a simulation of perception to the animation of a character's eyes.
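    As a minimal, assumed sketch of the general idea of linking an internal perception simulation to expressive eye animation, the snippet below picks the most salient perceived object and points the eyes at it. The saliency representation and the EyeRig class are placeholders, not the paper's implementation.

```python
# Assumed sketch: expose the character's internal perception state by driving
# its eye animation from whatever currently dominates its attention.
class EyeRig:
    def look_at(self, target):
        print(f"eyes -> {target}")        # stand-in for an actual eye animation

def most_salient(perceived):
    """perceived: dict mapping object name -> saliency score from the perception simulation."""
    return max(perceived, key=perceived.get) if perceived else None

def update_eyes(perceived, rig):
    target = most_salient(perceived)
    if target is not None:
        rig.look_at(target)

update_eyes({"door": 0.2, "speaker": 0.9}, EyeRig())
```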

    Learnable Computing

    We have to learn all new technologies, and we continue to learn for as long as we use them and develop that use. Learning is therefore an integral part of human engagement with technology, as it is with all areas of life. This paper proposes that we should consider learning as an important part of all human-computer interaction and that theories of learning can make an important contribution to HCI. It presents six vignettes that describe different ways in which this could happen: rethinking HCI concepts in terms of learning, applying learning theory to better understand established ideas in HCI, using learning research to inform HCI practice, understanding how people learn software, and inspiring us to rethink the aims of this discipline. This paper aims to start a conversation that could bring valuable new ideas into our "inter-discipline".

    Bodily Non-verbal Interaction with Virtual Characters

    Alongside spoken communication, human conversation has a non-verbal component that conveys complex and subtle emotional and interpersonal information. This information is conveyed largely bodily, with postures, gestures and facial expressions. In order to capture the Kansei aspects of human interaction within a virtual environment, it is therefore vital to model this bodily interaction. This type of interaction is largely subconscious and therefore difficult to model explicitly. We therefore propose a data-driven learning approach to creating characters capable of non-verbal bodily interaction with humans.

    Learning Finite State Machine Controllers from Motion Capture Data

    With characters in computer games and interactive media increasingly being based on real actors, the individuality of an actor's performance should not only be reflected in the appearance and animation of the character but also in the Artificial Intelligence that governs the character's behaviour and interactions with the environment. Machine learning methods applied to motion capture data provide a way of doing this. This paper presents a method for learning the parameters of a Finite State Machine controller. The method learns both the transition probabilities of the Finite State Machine and how to select animations based on the current state.
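    The abstract describes learning both transition probabilities and animation selection from motion capture data. The sketch below illustrates only the first part, using simple maximum-likelihood counts over labelled state sequences; the data format and function names are assumptions, and the paper's actual parameterisation may well differ.

```python
# Hedged sketch of the general technique: estimate Finite State Machine
# transition probabilities by counting transitions in labelled capture data.
from collections import Counter, defaultdict

def learn_transitions(state_sequences):
    """state_sequences: lists of behaviour states observed in motion capture data."""
    counts = defaultdict(Counter)
    for seq in state_sequences:
        for s, s_next in zip(seq, seq[1:]):
            counts[s][s_next] += 1
    # Normalise the counts into per-state transition probabilities.
    return {s: {t: n / sum(c.values()) for t, n in c.items()} for s, c in counts.items()}

# Two short captured sequences of an actor's behaviour states.
probs = learn_transitions([["idle", "walk", "walk", "idle"],
                           ["idle", "walk", "gesture", "idle"]])
print(probs["walk"])   # equal probability of walk, idle, gesture in this toy data
```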

    More than a bit of coding: (un-)Grounded (non-)Theory in HCI

    Grounded Theory Methodology (GTM) is a powerful way to develop theories where there is little existing research, using a flexible but rigorous empirically-based approach. Although it originates from the fields of social and health sciences, it is a field-agnostic methodology that can be used in any discipline. However, it tends to be misunderstood by researchers within HCI. This paper sets out to explain what GTM is and how it can be useful to HCI researchers, and gives examples of how it has been misapplied. There is an overview of the decades of methodological debate that surrounds GTM, why it is important to be aware of this debate, and how GTM differs from other, better understood, qualitative methodologies. It is hoped the reader is left with a greater understanding of GTM and better able to judge the results of research which claims to use GTM, but often does not.
    • …